
Conversation

NicholasTao (Contributor)
What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

taoyuxiang and others added 2 commits July 22, 2025 16:19

This pull request has conflicts; please resolve them before we can evaluate the pull request.

                attn_metadata: Optional[AttentionMetadata] = None) -> torch.Tensor:
        qkv, _ = self.qkv_proj(hidden_states)
        q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
        if type(self.rotary_emb) is RotaryEmbedding:

Change the logic here: decode requests should take the cache path, and non-decode requests should fall through to the else branch below.

        )
        for i in range(self.start_layer, self.end_layer):
            layer = self.layers[i]
            kv_c = kv_caches[i - self.start_layer] \

Let's rename kv_c to kv_cache.

from vllm.model_executor.layers.rotary_embedding import (
    DeepseekScalingRotaryEmbedding, RotaryEmbedding)

from vllm_ascend.ascend_config import get_ascend_config

Remove deepseek_rope_init_func and the other related interfaces.

@wangxiyuan (Collaborator)

No updates for a long time; closing it now. Feel free to create a new one if it's still needed.

@wangxiyuan wangxiyuan closed this Aug 19, 2025